Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
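Since the abstract describes BLOOM as a decoder-only Transformer, a minimal numpy sketch of the causal self-attention at the core of such a model may be useful. This is illustrative only: the function name and toy shapes are my own, and BLOOM additionally uses ALiBi position biases, multiple heads, and learned projections.

```python
import numpy as np

def causal_attention(q, k, v):
    """Single-head scaled dot-product attention with a causal mask, the core
    operation of a decoder-only Transformer (toy sketch)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    t = scores.shape[0]
    mask = np.triu(np.ones((t, t), dtype=bool), k=1)  # block attention to future tokens
    scores = np.where(mask, -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights, weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 8))          # 6 toy tokens, 8-dim embeddings
weights, out = causal_attention(x, x, x)
```

The causal mask is what lets such a model be trained as a next-token predictor: each position can only attend to itself and earlier positions.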
Humans with average social cognition can infer the beliefs of others based solely on nonverbal communication cues (e.g., gaze, gestures, posture, and contextual information). This socio-cognitive ability to predict human beliefs and intentions is more important than ever for ensuring safe human-robot interaction and collaboration. This paper uses the combined knowledge of Theory of Mind (ToM) and object-context relations to investigate methods for enhancing collaboration between humans and autonomous systems in environments where verbal communication is prohibited. We propose a novel and challenging multimodal video dataset for assessing the capability of artificial intelligence (AI) systems to predict human belief states in object-context scenarios. The proposed dataset consists of precisely labeled human belief state ground truth and multimodal inputs that replicate all the nonverbal communication inputs captured by human perception. We further evaluate the dataset with existing deep learning models and provide new insights into the effects of different input modalities and object-context relations on baseline model performance.
In practice, asking for help is often more efficient than searching an entire space to find an object whose location is unknown. We present a learning framework that enables an agent to actively seek help in such embodied visual navigation tasks, where feedback informs the agent of the goal's location. To mimic the realistic setting in which a teacher may not always be present, we propose a training curriculum in which feedback is not always available. We formulate an uncertainty measure over the goal's location and show empirically that, with this approach, the agent learns to ask for help effectively while remaining robust when no feedback is available.
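As a hedged sketch of one plausible form such an uncertainty measure could take (the paper's exact formulation may differ, and the threshold here is hypothetical): the entropy of the agent's belief distribution over candidate goal locations, with help requested only when the belief is sufficiently flat.

```python
import numpy as np

def belief_entropy(belief):
    """Shannon entropy (in nats) of a distribution over candidate goal locations."""
    p = np.asarray(belief, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum())

def should_ask_for_help(belief, threshold=1.0):
    """Request teacher feedback only when the agent is sufficiently uncertain."""
    return belief_entropy(belief) > threshold

confident = [0.97, 0.01, 0.01, 0.01]   # peaked belief: keep navigating
uncertain = [0.25, 0.25, 0.25, 0.25]   # flat belief: ask for help
```

A measure like this also degrades gracefully when the teacher is absent: the agent simply continues navigating under its current belief.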
Children's cognitive abilities are sometimes cited as AI benchmarks. How can the most common 1,000 concepts (89% of everyday usage) be learned in a naturalistic children's environment? Children's cognitive development hinges on quality: new concepts can be conveyed through simple examples. Our knowledge scaffolding approach uses simple objects and actions to convey concepts, akin to how children are taught. We introduce ABCDE, an interactive 3D environment modeled after a typical child's playroom. It comes with more than 300 unique 3D object assets (mostly toys) and a large action space for child and parent agents to interact with the objects. ABCDE is the first environment designed to mimic a naturalistic setting for children's cognitive development; no other environment studies high-level concept learning through a learner's interactions. The simulator can be found at https://pypi.org/project/abcdesim/1.0.0/
Accurately forecasting the outcomes of physical interactions is a key component of human intelligence and is important for the safe and efficient deployment of robots in the real world. While vision-based intuitive physics models exist for learning to predict the outcomes of physical interactions, they mainly focus on generating short sequences of future frames based on physical properties (e.g., mass, friction, and velocity) extracted from visual inputs or a latent space. However, there is a lack of intuitive physics models tested on long physical-interaction sequences with multiple interactions among different objects. We hypothesize that selective temporal attention during approximate mental simulation helps humans predict the outcomes of physical interactions. Motivated by this, we propose a novel scheme: Physical Interaction Prediction via Mental Simulation with Span Selection (PIP). It utilizes a deep generative model to approximate mental simulation by generating approximate physical interactions, before employing selective temporal attention in the form of span selection to predict the physical interaction outcomes. To evaluate our model, we further propose the large-scale SPACE+ dataset of long sequences featuring three prime physical interactions in a 3D environment. Our experiments show that PIP outperforms human subjects, baselines, and related intuitive physics models that utilize mental simulation. Furthermore, PIP's span selection module effectively identifies the frames indicating key physical interactions among objects, allowing for added interpretability.
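To give a concrete feel for span selection over frames, here is a toy stand-in (not PIP's learned module): given per-frame relevance scores, pick the contiguous window of frames with the highest total score. The fixed window width and synthetic scores are my own assumptions.

```python
import numpy as np

def select_span(frame_scores, width=5):
    """Pick the contiguous window of `width` frames with the highest total
    relevance score -- a toy stand-in for a learned span-selection module."""
    scores = np.asarray(frame_scores, dtype=float)
    window_sums = np.convolve(scores, np.ones(width), mode="valid")
    start = int(window_sums.argmax())
    return start, start + width

# Synthetic relevance scores: a key interaction around frames 40-44.
scores = np.zeros(100)
scores[40:45] = [0.5, 0.9, 1.0, 0.9, 0.5]
span = select_span(scores, width=5)
```

The selected span directly names which frames drove the prediction, which is the source of the interpretability the abstract mentions.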
There has been an emerging paradigm shift in AI algorithms and agents from the era of "Internet AI" to the era of "Embodied AI", in which agents no longer learn from datasets of images, videos, or text curated primarily from the internet. Instead, they learn through interactions with their environments from an egocentric perception similar to that of humans. Consequently, there has been substantial growth in the demand for embodied AI simulators to support a variety of embodied AI research tasks. This growing interest in embodied AI benefits the greater pursuit of Artificial General Intelligence (AGI), but there has been no contemporary and comprehensive survey of this field. This paper aims to provide an encyclopedic survey of the field of embodied AI, from its simulators to its research. By evaluating nine current embodied AI simulators with our proposed seven features, this paper aims to characterize the simulators' use in embodied AI research and their limitations. It then surveys the three main research tasks in embodied AI (visual exploration, visual navigation, and embodied question answering (QA)), covering the state-of-the-art approaches, evaluation metrics, and datasets. Finally, with the new insights gathered from surveying the field, the paper offers guidance on simulator-task selection and recommendations for future directions of the field.
Code generation models have achieved impressive performance. However, they tend to be brittle as slight edits to a prompt could lead to very different generations; these robustness properties, critical for user experience when deployed in real-life applications, are not well understood. Most existing works on robustness in text or code tasks have focused on classification, while robustness in generation tasks is an uncharted area and to date there is no comprehensive benchmark for robustness in code generation. In this paper, we propose ReCode, a comprehensive robustness evaluation benchmark for code generation models. We customize over 30 transformations specifically for code on docstrings, function and variable names, code syntax, and code format. They are carefully designed to be natural in real-life coding practice, preserve the original semantic meaning, and thus provide multifaceted assessments of a model's robustness performance. With human annotators, we verified that over 90% of the perturbed prompts do not alter the semantic meaning of the original prompt. In addition, we define robustness metrics for code generation models considering the worst-case behavior under each type of perturbation, taking advantage of the fact that executing the generated code can serve as objective evaluation. We demonstrate ReCode on SOTA models using HumanEval, MBPP, as well as function completion tasks derived from them. Interesting observations include: better robustness for CodeGen over InCoder and GPT-J; models are most sensitive to syntax perturbations; more challenging robustness evaluation on MBPP over HumanEval.
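To illustrate the kind of semantics-preserving prompt perturbation the abstract describes (ReCode's own transformations are more extensive and more careful), here is a toy variable-rename transform. A consistent whole-word rename should not change what a correct code generation model completes; the example prompt and names are my own. A real implementation would use a tokenizer or AST so that string literals and docstrings are not touched.

```python
import re

def rename_variable(prompt: str, old: str, new: str) -> str:
    """Consistently rename an identifier in a code prompt (whole-word match),
    leaving the program's semantics unchanged. Naive regex sketch only."""
    return re.sub(rf"\b{re.escape(old)}\b", new, prompt)

prompt = '''def running_sum(nums):
    """Return the running sums of nums."""
    total = 0
    out = []
    for x in nums:
        total += x
        out.append(total)
'''

perturbed = rename_variable(prompt, "total", "acc")
```

A robustness metric of the worst-case kind the abstract defines would then execute the model's completions of `prompt` and `perturbed` and count a failure if either completion's code fails the task's tests.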
Kernel matrices, as well as weighted graphs represented by them, are ubiquitous objects in machine learning, statistics and other related fields. The main drawback of using kernel methods (learning and inference using kernel matrices) is efficiency -- given $n$ input points, most kernel-based algorithms need to materialize the full $n \times n$ kernel matrix before performing any subsequent computation, thus incurring $\Omega(n^2)$ runtime. Breaking this quadratic barrier for various problems has therefore, been a subject of extensive research efforts. We break the quadratic barrier and obtain $\textit{subquadratic}$ time algorithms for several fundamental linear-algebraic and graph processing primitives, including approximating the top eigenvalue and eigenvector, spectral sparsification, solving linear systems, local clustering, low-rank approximation, arboricity estimation and counting weighted triangles. We build on the recent Kernel Density Estimation framework, which (after preprocessing in time subquadratic in $n$) can return estimates of row/column sums of the kernel matrix. In particular, we develop efficient reductions from $\textit{weighted vertex}$ and $\textit{weighted edge sampling}$ on kernel graphs, $\textit{simulating random walks}$ on kernel graphs, and $\textit{importance sampling}$ on matrices to Kernel Density Estimation and show that we can generate samples from these distributions in $\textit{sublinear}$ (in the support of the distribution) time. Our reductions are the central ingredient in each of our applications and we believe they may be of independent interest. We empirically demonstrate the efficacy of our algorithms on low-rank approximation (LRA) and spectral sparsification, where we observe a $\textbf{9x}$ decrease in the number of kernel evaluations over baselines for LRA and a $\textbf{41x}$ reduction in the graph size for spectral sparsification.
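As a toy illustration of the quadratic barrier and of one primitive named above (not the paper's algorithm), the sketch below materializes a Gaussian kernel matrix in $\Theta(n^2)$ time, computes its row sums (the weighted degrees of the kernel graph), and performs weighted vertex sampling proportional to those degrees. The paper's contribution is obtaining such samples *without* the materialization step, using KDE-based estimates of the row sums; the function names and toy data here are my own.

```python
import numpy as np

def gaussian_kernel_matrix(X, bandwidth=1.0):
    """Naively materialize the full n x n Gaussian kernel matrix: Theta(n^2) work."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def weighted_vertex_sample(K, rng, size=1):
    """Sample vertices of the kernel graph proportionally to their weighted
    degree (row sum). The paper shows this can be done without forming K,
    given a KDE oracle for the row sums; here we use the exact sums."""
    degrees = K.sum(axis=1)
    probs = degrees / degrees.sum()
    return rng.choice(len(degrees), size=size, p=probs)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
K = gaussian_kernel_matrix(X)
samples = weighted_vertex_sample(K, rng, size=10)
```

Replacing the exact `degrees` vector with subquadratic KDE estimates is what turns this $\Omega(n^2)$ routine into the sublinear sampler the abstract describes.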
We present BotSIM, a data-efficient end-to-end Bot SIMulation toolkit for commercial text-based task-oriented dialog (TOD) systems. BotSIM consists of three major components: 1) a Generator that can infer semantic-level dialog acts and entities from bot definitions and generate user queries via model-based paraphrasing; 2) an agenda-based dialog user Simulator (ABUS) to simulate conversations with the dialog agents; 3) a Remediator to analyze the simulated conversations, visualize the bot health reports and provide actionable remediation suggestions for bot troubleshooting and improvement. We demonstrate BotSIM's effectiveness in end-to-end evaluation, remediation and multi-intent dialog generation via case studies on two commercial bot platforms. BotSIM's "generation-simulation-remediation" paradigm accelerates the end-to-end bot evaluation and iteration process by: 1) reducing the manual test-case creation effort; 2) enabling a holistic gauge of the bot in terms of NLU and end-to-end performance via extensive dialog simulation; 3) improving the bot troubleshooting process with actionable suggestions. A demo of our system can be found at https://tinyurl.com/mryu74cd and a demo video at https://youtu.be/qLi5iSoly30. We have open-sourced the toolkit at https://github.com/salesforce/botsim
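Since the agenda-based user simulator (ABUS) is the least self-explanatory of the three components, here is a minimal sketch of the general ABUS idea: the user goal is compiled into an agenda of dialog acts that the simulator pops one per turn, and the bot's requests push matching inform acts back onto it. This is illustrative only, not BotSIM's implementation; the class, act names, and goal schema are my own.

```python
from collections import deque

class AgendaBasedUserSimulator:
    """Toy agenda-based user simulator: a stack of pending dialog acts
    drives the simulated user's side of the conversation."""

    def __init__(self, goal):
        # e.g. goal = {"intent": "book_flight", "slots": {"from": "SFO", "to": "JFK"}}
        self.goal = goal
        self.agenda = deque([("inform_intent", goal["intent"])])

    def next_turn(self, bot_action=None):
        # If the bot requested a slot, push the matching inform act on top.
        if bot_action and bot_action[0] == "request":
            slot = bot_action[1]
            self.agenda.appendleft(("inform", slot, self.goal["slots"][slot]))
        if not self.agenda:
            return ("bye",)          # agenda exhausted: end the dialog
        return self.agenda.popleft()

sim = AgendaBasedUserSimulator(
    {"intent": "book_flight", "slots": {"from": "SFO", "to": "JFK"}})
turn1 = sim.next_turn()                     # user states the intent
turn2 = sim.next_turn(("request", "from"))  # bot asks for departure city
turn3 = sim.next_turn(("request", "to"))    # bot asks for destination
turn4 = sim.next_turn()                     # nothing left: say goodbye
```

Running many such simulated dialogs against a deployed bot is what lets a toolkit like BotSIM gauge end-to-end performance without hand-written test cases.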
Authorship attribution is the task of identifying the author of a given text. Most existing approaches use manually designed features to capture a dataset's content and style. However, this dataset-dependent approach yields inconsistent performance. We therefore propose fine-tuning pre-trained language representations using a combination of contrastive learning and supervised learning (Contra-X). We show that Contra-X advances the state of the art on multiple human and machine authorship attribution benchmarks, with improvements of up to 6.8%. We also show that Contra-X consistently outperforms cross-entropy fine-tuning across different data regimes. Crucially, we present qualitative and quantitative analyses of these improvements. Our learned representations form highly separable clusters for different authors. However, we find that contrastive learning improves overall accuracy at the cost of sacrificing performance for some authors. Resolving this tension will be an important direction for future work. To the best of our knowledge, we are the first to analyze the effect of combining contrastive learning with cross-entropy fine-tuning for authorship attribution.
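A minimal numpy sketch of the general shape of such a combined objective may help: a cross-entropy term over author logits plus a supervised contrastive term (in the style of Khosla et al.) that pulls together embeddings sharing an author label. The mixing weight `lam`, temperature, and toy batch are my own assumptions; the paper fine-tunes a pre-trained language model rather than using random features.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy over a batch of classifier logits."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def supervised_contrastive(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss: pull together embeddings that share an
    author label, push apart the rest (self-similarity excluded)."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    mask_self = np.eye(n, dtype=bool)
    sim = np.where(mask_self, -np.inf, sim)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~mask_self
    return -(np.where(pos, log_prob, 0.0).sum(axis=1) / pos.sum(axis=1)).mean()

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))       # toy text embeddings
logits = rng.normal(size=(8, 4))     # toy classifier logits over 4 authors
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
lam = 0.5                            # hypothetical mixing weight
loss = cross_entropy(logits, labels) + lam * supervised_contrastive(emb, labels)
```

The contrastive term is what encourages the highly separable per-author clusters the abstract reports, while the cross-entropy term keeps the classifier head accurate.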